Use Keras experiment to transfer style

This notebook contains the steps and code required to demonstrate the style transfer technique using the Watson Machine Learning service. It introduces commands for getting data, persisting a training definition to the Watson Machine Learning repository, and training a model.

Some familiarity with Python is helpful. This notebook uses Python 3 and Watson Studio environments.

Learning goals

In this notebook you learn to work with Watson Machine Learning experiments to train Deep Learning models (Keras).

Contents

  1. Set up
  2. Create the training definitions
  3. Create the experiment definition
  4. Run the experiment
  5. Results
  6. Summary

1. Set up

Before you use the sample code in this notebook, you must perform the following setup tasks:

  • Create a Watson Machine Learning (WML) service instance. A free plan is offered.
  • Create a Cloud Object Storage (COS) instance. A Lite plan is offered.
    Note: When you use Watson Studio, a COS instance is already associated with the project you are running the notebook in.
  • Create new credentials with HMAC:

    • Go to your COS dashboard.
    • In the Service credentials tab, click New Credential+.
    • Add the inline configuration parameter: {"HMAC":true}, click Add. (For more information, see HMAC.)

      This configuration parameter adds the following section to the instance credentials (for use later in this notebook):

      "cos_hmac_keys": {
            "access_key_id": "-------",
            "secret_access_key": "-------"
      }

1.1 Work with Cloud Object Storage (COS)

Import the Boto library, which allows Python developers to manage COS.

In [1]:
# Import the boto library
import ibm_boto3
from ibm_botocore.client import Config
import os
import json
import warnings
import urllib.request
import time
warnings.filterwarnings('ignore')

Authenticate to COS and define the endpoint you will use.

  1. Enter your COS credentials in the following cell. You can find these credentials in your COS instance dashboard under the Service credentials tab as described in the set up section.

  2. Go to the Endpoint tab in the COS instance's dashboard to get the endpoint information, for example: s3-api.us-geo.objectstorage.softlayer.net.

In [ ]:
# Enter your COS credentials.
cos_credentials = {
  "apikey": "***",
  "cos_hmac_keys": {
    "access_key_id": "***",
    "secret_access_key": "***"
  },
  "endpoints": "https://cos-service.bluemix.net/endpoints",
  "iam_apikey_description": "***",
  "iam_apikey_name": "***",
  "iam_role_crn": "crn:v1:bluemix:public:iam::::serviceRole:Writer",
  "iam_serviceid_crn": "***",
  "resource_instance_id": "***"
}

api_key = cos_credentials['apikey']
service_instance_id = cos_credentials['resource_instance_id']
auth_endpoint = 'https://iam.bluemix.net/oidc/token'
# Enter your Endpoint information.
service_endpoint = 'https://s3-api.us-geo.objectstorage.softlayer.net'
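Pasting secrets directly into a notebook is risky if you share it. As an alternative sketch (the environment variable names below are an assumption; export whichever names you prefer), you could assemble the same credential fields from the environment:

```python
import os

def cos_credentials_from_env():
    """Build the subset of COS credential fields this notebook uses
    from environment variables instead of hardcoded strings."""
    return {
        "apikey": os.environ["COS_API_KEY"],
        "resource_instance_id": os.environ["COS_INSTANCE_ID"],
        "cos_hmac_keys": {
            "access_key_id": os.environ["COS_HMAC_ACCESS_KEY_ID"],
            "secret_access_key": os.environ["COS_HMAC_SECRET_ACCESS_KEY"],
        },
    }
```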

Create the Boto resource by providing type, endpoint_url and credentials.

In [3]:
cos = ibm_boto3.resource('s3',
                         ibm_api_key_id=api_key,
                         ibm_service_instance_id=service_instance_id,
                         ibm_auth_endpoint=auth_endpoint,
                         config=Config(signature_version='oauth'),
                         endpoint_url=service_endpoint)

Create the buckets you will use to store training data and training results.

Note: Bucket names must be unique.

In [5]:
# Create two buckets, style-data-example-2 and style-results-example-2
buckets = ['style-data-example-2', 'style-results-example-2']
for bucket in buckets:
    if not cos.Bucket(bucket) in cos.buckets.all():
        print('Creating bucket "{}"...'.format(bucket))
        try:
            cos.create_bucket(Bucket=bucket)
        except ibm_boto3.exceptions.ibm_botocore.client.ClientError as e:
            print('Error: {}.'.format(e.response['Error']['Message']))
Creating bucket "style-data-example-2"...
Creating bucket "style-results-example-2"...
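Because bucket names must be globally unique, the fixed names above can fail for another user. A common workaround (a sketch, not part of the original sample) is to append a short random suffix:

```python
import uuid

def unique_bucket_name(prefix):
    """Append a short random hex suffix so the bucket name is unlikely
    to collide with an existing bucket. COS bucket names are lowercase."""
    return '{}-{}'.format(prefix, uuid.uuid4().hex[:8])

# Example:
# buckets = [unique_bucket_name('style-data'), unique_bucket_name('style-results')]
```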

You have now created two new buckets:

  • style-data-example-2
  • style-results-example-2

Display a list of buckets for your COS instance to verify that the buckets were created.

In [ ]:
# Display the buckets
print(list(cos.buckets.all()))

1.2 Download training data and upload it to COS buckets

Download the training data and upload it to the training data bucket you created (buckets[0]). First, create a list of links for the training dataset.

The following code snippet creates the STYLE_DATA folder and downloads the files from the links to the folder.

In [7]:
import os

# Create folder
data_dir = 'STYLE_DATA'
if not os.path.isdir(data_dir):
    os.mkdir(data_dir)

links = ['https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg19_weights_tf_dim_ordering_tf_kernels_notop.h5',
         'https://upload.wikimedia.org/wikipedia/commons/thumb/e/ea/Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg/1513px-Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg',
         'https://upload.wikimedia.org/wikipedia/commons/5/52/Krak%C3%B3w_239a.jpg',
         'https://upload.wikimedia.org/wikipedia/commons/3/3f/Kandinsky%2C_Lyrisches.jpg']

# Download each link into the folder
for link in links:
    if 'Gogh' in link:
        filepath = os.path.join(data_dir, 'van_gogh.jpg')
    elif 'Krak' in link:
        filepath = os.path.join(data_dir, 'krakow.jpg')
    elif 'Kandinsky' in link:
        filepath = os.path.join(data_dir, 'kandinsky.jpg')
    else:
        filepath = os.path.join(data_dir, link.split('/')[-1])

    if not os.path.isfile(filepath):
        print(link)
        urllib.request.urlretrieve(link, filepath)

# List the files in the STYLE_DATA folder        
!ls STYLE_DATA
kandinsky.jpg  van_gogh.jpg
krakow.jpg     vgg19_weights_tf_dim_ordering_tf_kernels_notop.h5
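Downloads can fail silently (for example, a truncated file). A quick sanity check (a sketch; the helper name is mine) verifies that every expected file exists and is non-empty:

```python
import os

def missing_or_empty(folder, expected_files):
    """Return the names of expected files that are absent or zero bytes."""
    problems = []
    for name in expected_files:
        path = os.path.join(folder, name)
        if not os.path.isfile(path) or os.path.getsize(path) == 0:
            problems.append(name)
    return problems

# Example:
# assert not missing_or_empty(data_dir, ['krakow.jpg', 'van_gogh.jpg', 'kandinsky.jpg'])
```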

Base image: Cracow - main market square

In [8]:
from IPython.display import Image
Image(filename=os.path.join(data_dir, 'krakow.jpg'), width=1000)
Out[8]:

Style image 1: Vincent Van Gogh - Starry Night

In [9]:
Image(filename=os.path.join(data_dir, 'van_gogh.jpg'), width=500)
Out[9]:

Style image 2: Kandinsky Lyrisches

In [10]:
Image(filename=os.path.join(data_dir, 'kandinsky.jpg'), width=600)
Out[10]:

Upload the data files to the created buckets.

In [11]:
bucket_name = buckets[0]
bucket_obj = cos.Bucket(bucket_name)
In [12]:
for filename in os.listdir(data_dir):
    bucket_obj.upload_file(os.path.join(data_dir, filename), filename)
    print('{} is uploaded.'.format(filename))
kandinsky.jpg is uploaded.
krakow.jpg is uploaded.
van_gogh.jpg is uploaded.
vgg19_weights_tf_dim_ordering_tf_kernels_notop.h5 is uploaded.

List the contents of the training data bucket.

In [13]:
for obj in bucket_obj.objects.all():
    print('Object key: {}'.format(obj.key))
    print('Object size (kb): {}'.format(obj.size/1024))
Object key: kandinsky.jpg
Object size (kb): 337.97265625
Object key: krakow.jpg
Object size (kb): 2063.50390625
Object key: van_gogh.jpg
Object size (kb): 833.9755859375
Object key: vgg19_weights_tf_dim_ordering_tf_kernels_notop.h5
Object size (kb): 78256.46875

You are done with COS, and you are now ready to train your model!

1.3 Work with the Watson Machine Learning instance

Load the libraries you need.

In [14]:
import urllib3, requests, json, base64, time, os

Authenticate to the Watson Machine Learning (WML) service on IBM Cloud.

Tip: Authentication information (your credentials) can be found in the Service credentials tab of the service instance that you created on IBM Cloud. If there are no credentials listed for your instance in Service credentials, click New credential (+) and enter the information required to generate new authentication information.

Action: Enter your WML service instance credentials here.

In [ ]:
wml_credentials = {
  "instance_id": "***",
  "password": "***",
  "url": "https://ibm-watson-ml.mybluemix.net",
  "username": "***"
}

Install the watson-machine-learning-client package from PyPI.

In [ ]:
!rm -rf $PIP_BUILD/watson-machine-learning-client
In [ ]:
!pip install --upgrade watson-machine-learning-client

Import the watson-machine-learning-client and authenticate to the service instance.

In [ ]:
from watson_machine_learning_client import WatsonMachineLearningAPIClient
In [19]:
client = WatsonMachineLearningAPIClient(wml_credentials)
In [20]:
print(client.version)
1.0.302

2. Create the training definitions

2.1 Prepare the training definition metadata

Hint: The quality of the final image depends on the number of iterations; more iterations also mean a longer training time.

In [21]:
#Set the number of iterations.
iters = 1
In [22]:
model_definition_1_metadata = {
            client.repository.DefinitionMetaNames.NAME: "style transfer van gogh",
            client.repository.DefinitionMetaNames.FRAMEWORK_NAME: "tensorflow",
            client.repository.DefinitionMetaNames.FRAMEWORK_VERSION: "1.5",
            client.repository.DefinitionMetaNames.RUNTIME_NAME: "python",
            client.repository.DefinitionMetaNames.RUNTIME_VERSION: "3.5",
            client.repository.DefinitionMetaNames.EXECUTION_COMMAND: "python style_transfer.py krakow.jpg van_gogh.jpg krakow --iter " + str(iters)
            }
In [23]:
model_definition_2_metadata = {
            client.repository.DefinitionMetaNames.NAME: "style transfer kandinsky",
            client.repository.DefinitionMetaNames.FRAMEWORK_NAME: "tensorflow",
            client.repository.DefinitionMetaNames.FRAMEWORK_VERSION: "1.5",
            client.repository.DefinitionMetaNames.RUNTIME_NAME: "python",
            client.repository.DefinitionMetaNames.RUNTIME_VERSION: "3.5",
            client.repository.DefinitionMetaNames.EXECUTION_COMMAND: "python style_transfer.py krakow.jpg kandinsky.jpg krakow --iter " + str(iters)
            }
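The two metadata dictionaries above differ only in the name and the style image. A small helper (a sketch of my own, not part of the WML client API; the script name and argument order come from the sample STYLE.zip used in this notebook) removes the duplication:

```python
def style_transfer_definition(client, name, base_image, style_image, iters):
    """Build the training-definition metadata for one style image."""
    command = 'python style_transfer.py {} {} krakow --iter {}'.format(
        base_image, style_image, iters)
    return {
        client.repository.DefinitionMetaNames.NAME: name,
        client.repository.DefinitionMetaNames.FRAMEWORK_NAME: "tensorflow",
        client.repository.DefinitionMetaNames.FRAMEWORK_VERSION: "1.5",
        client.repository.DefinitionMetaNames.RUNTIME_NAME: "python",
        client.repository.DefinitionMetaNames.RUNTIME_VERSION: "3.5",
        client.repository.DefinitionMetaNames.EXECUTION_COMMAND: command,
    }

# Example:
# model_definition_1_metadata = style_transfer_definition(
#     client, "style transfer van gogh", "krakow.jpg", "van_gogh.jpg", iters)
```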

2.2 Get the sample model definition content files from Git

In [24]:
!rm -rf STYLE.zip
In [25]:
filename_definition = 'STYLE.zip'

if not os.path.isfile(filename_definition):
    !wget https://github.com/pmservice/wml-sample-models/raw/master/keras/style/definition/STYLE.zip

!ls STYLE.zip
--2018-08-22 16:23:54--  https://github.com/pmservice/wml-sample-models/raw/master/keras/style/definition/STYLE.zip
Resolving github.com (github.com)... 192.30.253.112, 192.30.253.113
Connecting to github.com (github.com)|192.30.253.112|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://raw.githubusercontent.com/pmservice/wml-sample-models/master/keras/style/definition/STYLE.zip [following]
--2018-08-22 16:23:54--  https://raw.githubusercontent.com/pmservice/wml-sample-models/master/keras/style/definition/STYLE.zip
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.48.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.48.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3954 (3.9K) [application/zip]
Saving to: ‘STYLE.zip’

100%[======================================>] 3,954       --.-K/s   in 0s      

2018-08-22 16:23:54 (61.0 MB/s) - ‘STYLE.zip’ saved [3954/3954]

STYLE.zip

2.3 Store the training definition in the WML repository

Store definition 1

In [26]:
definition_details = client.repository.store_definition(filename_definition, model_definition_1_metadata)

definition_url = client.repository.get_definition_url(definition_details)
definition_uid = client.repository.get_definition_uid(definition_details)
print(definition_url)
https://us-south.ml.cloud.ibm.com/v3/ml_assets/training_definitions/688e3a0f-ea8c-4873-adc0-6bf00d04185d

Store definition 2

In [27]:
definition_2_details = client.repository.store_definition(filename_definition, model_definition_2_metadata)

definition_2_url = client.repository.get_definition_url(definition_2_details)
definition_2_uid = client.repository.get_definition_uid(definition_2_details)
print(definition_2_url)
https://us-south.ml.cloud.ibm.com/v3/ml_assets/training_definitions/b9e2d8f2-183d-4383-8c19-8be8c396816c

List the stored definitions

In [28]:
client.repository.list_definitions()
------------------------------------  ------------------------  ------------------------  ----------
GUID                                  NAME                      CREATED                   FRAMEWORK
b9e2d8f2-183d-4383-8c19-8be8c396816c  style transfer kandinsky  2018-08-22T16:24:12.287Z  tensorflow
688e3a0f-ea8c-4873-adc0-6bf00d04185d  style transfer van gogh   2018-08-22T16:24:06.914Z  tensorflow
------------------------------------  ------------------------  ------------------------  ----------

3. Create the experiment definition

Get a list of supported configuration parameters.

In [29]:
client.repository.ExperimentMetaNames.show()
--------------------------  ----  --------
META_PROP NAME              TYPE  REQUIRED
NAME                        str   Y
DESCRIPTION                 str   N
TAGS                        list  N
AUTHOR_NAME                 str   N
EVALUATION_METHOD           str   N
EVALUATION_METRICS          list  N
TRAINING_REFERENCES         list  Y
TRAINING_DATA_REFERENCE     dict  Y
TRAINING_RESULTS_REFERENCE  dict  Y
--------------------------  ----  --------

Create an experiment that trains two models, based on the previously stored training definitions.

In [30]:
TRAINING_DATA_REFERENCE = {
                            "connection": {
                                "endpoint_url": service_endpoint,
                                "access_key_id": cos_credentials['cos_hmac_keys']['access_key_id'],
                                "secret_access_key": cos_credentials['cos_hmac_keys']['secret_access_key']
                            },
                            "source": {
                                "bucket": buckets[0],
                            },
                            "type": "s3"
                        }
In [31]:
TRAINING_RESULTS_REFERENCE = {
                            "connection": {
                                "endpoint_url": service_endpoint,
                                "access_key_id": cos_credentials['cos_hmac_keys']['access_key_id'],
                                "secret_access_key": cos_credentials['cos_hmac_keys']['secret_access_key']
                            },
                            "target": {
                                "bucket": buckets[1],
                            },
                            "type": "s3"
                        }
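The two reference dictionaries share the same connection block and differ only in the "source"/"target" key and the bucket. A helper (a sketch; the dictionary shape mirrors the two cells above) can build either kind:

```python
def s3_reference(cos_credentials, service_endpoint, bucket, role):
    """Build a WML S3 data reference.
    role is 'source' (training data) or 'target' (training results)."""
    hmac = cos_credentials['cos_hmac_keys']
    return {
        "connection": {
            "endpoint_url": service_endpoint,
            "access_key_id": hmac['access_key_id'],
            "secret_access_key": hmac['secret_access_key'],
        },
        role: {"bucket": bucket},
        "type": "s3",
    }

# Example:
# TRAINING_DATA_REFERENCE = s3_reference(cos_credentials, service_endpoint, buckets[0], 'source')
# TRAINING_RESULTS_REFERENCE = s3_reference(cos_credentials, service_endpoint, buckets[1], 'target')
```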
In [32]:
experiment_metadata = {
            client.repository.ExperimentMetaNames.NAME: "STYLE experiment",
            client.repository.ExperimentMetaNames.TRAINING_DATA_REFERENCE: TRAINING_DATA_REFERENCE,
            client.repository.ExperimentMetaNames.TRAINING_RESULTS_REFERENCE: TRAINING_RESULTS_REFERENCE,
            client.repository.ExperimentMetaNames.TRAINING_REFERENCES: [
                        {
                            "name": "van gogh - cracow",
                            "training_definition_url": definition_url,
                            "compute_configuration": {"name": "k80x4"}
                        },
                        {
                            "name": "kandinsky - cracow",
                            "training_definition_url": definition_2_url,
                            "compute_configuration": {"name": "k80x4"}
                        },
                    ],
                }

Store the experiment in the WML repository.

In [33]:
# Store the experiment and display the experiment_uid.
experiment_details = client.repository.store_experiment(meta_props=experiment_metadata)

experiment_uid = client.repository.get_experiment_uid(experiment_details)
print(experiment_uid)
05556c93-f052-4c24-a4f0-e0d8c3ad4fa8

List the stored experiments.

In [34]:
client.repository.list_experiments()
------------------------------------  ----------------  ------------------------
GUID                                  NAME              CREATED
05556c93-f052-4c24-a4f0-e0d8c3ad4fa8  STYLE experiment  2018-08-22T16:26:09.427Z
------------------------------------  ----------------  ------------------------

Get the experiment definition details

In [35]:
details = client.repository.get_experiment_details(experiment_uid)

4. Run the experiment

Tip: To run the experiment in the background, set the optional parameter asynchronous=True (or omit it; asynchronous runs are the default). Here asynchronous=False is used, so the cell waits until the run completes.

In [36]:
experiment_run_details = client.experiments.run(experiment_uid, asynchronous=False)

#########################################################

Running '05556c93-f052-4c24-a4f0-e0d8c3ad4fa8' experiment

#########################################################


Experiment run uid: 922c9e06-1c2d-4aed-ba25-bd3329b0667f

0%   - Processing training-CVIae-piR (1/2): experiment_state=pending, training_state=pending
0%   - Processing training-CVIae-piR (1/2): experiment_state=pending, training_state=running
0%   - Processing training-CVIae-piR (1/2): experiment_state=running, training_state=running
0%   - Processing training-CVIae-piR (1/2): experiment_state=completed, training_state=completed
100% - Finished processing training runs: experiment_state=completed


--------------------------------------------------------------------
Run of '05556c93-f052-4c24-a4f0-e0d8c3ad4fa8' finished successfully.
--------------------------------------------------------------------


As you can see, the experiment run has finished.

Get the experiment run UID.

In [37]:
experiment_run_id = client.experiments.get_run_uid(experiment_run_details)
print(experiment_run_id)
922c9e06-1c2d-4aed-ba25-bd3329b0667f

Get the run details.

The code in the following cell gets details about a particular experiment run.

In [38]:
run_details = client.experiments.get_run_details(experiment_run_id)

Get the experiment run status.

Call client.experiments.get_status(run_uid) to check the experiment run status. This is useful when you run an experiment in the background.

In [39]:
status = client.experiments.get_status(experiment_run_id)
print(status)
{'current_iteration': 1, 'submitted_at': '2018-08-22T16:26:29Z', 'state': 'completed', 'current_at': '2018-08-22T16:26:29Z'}
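When you run with asynchronous=True, you need to poll the status yourself. The helper below is an illustrative sketch: it takes any zero-argument function that returns a state string, so it does not depend on the WML service being reachable.

```python
import time

def wait_until_done(get_state, poll_seconds=10, timeout_seconds=3600):
    """Poll get_state() until it returns a terminal state string
    such as 'completed', 'error', 'canceled', or 'failed'."""
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        state = get_state()
        if state in ('completed', 'error', 'canceled', 'failed'):
            return state
        time.sleep(poll_seconds)
    raise TimeoutError('experiment run did not finish in time')

# Example:
# final_state = wait_until_done(
#     lambda: client.experiments.get_status(experiment_run_id)['state'])
```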

Monitor the experiment run.

Call client.experiments.monitor_logs(run_uid) to monitor the experiment run. This method streams the training logs content to the console.

client.experiments.monitor_logs(experiment_run_id)

List the training runs triggered by the experiment run.

In [40]:
client.experiments.list_training_runs(experiment_run_id)
------------------  ------------------  ---------  --------------------  --------------------  -----------
GUID (training)     NAME                STATE      SUBMITTED             FINISHED              PERFORMANCE
training-CVIae-piR  kandinsky - cracow  completed  2018-08-22T16:26:32Z  2018-08-22T16:29:47Z  -
training-jWIaeatig  van gogh - cracow   completed  2018-08-22T16:26:32Z  2018-08-22T16:29:50Z  -
------------------  ------------------  ---------  --------------------  --------------------  -----------

As you can see, two training runs completed.

In [41]:
# List the training uids.
training_uids = client.experiments.get_training_uids(experiment_run_details)
print(training_uids)
['training-CVIae-piR', 'training-jWIaeatig']

5. Results: style-transferred images

In [42]:
bucket_name = buckets[1]
bucket_obj = cos.Bucket(bucket_name)
In [43]:
transfered_images = []

for uid in training_uids:
    obj = bucket_obj.Object(uid + '/transfered_images/krakow_at_iteration_' + str(iters-1) + '.png')
    filename = 'krakow_transfered_' + str(uid) + '.jpg'
    transfered_images.append(filename)
    with open(filename, 'wb') as data:
        obj.download_fileobj(data)
    print(filename)
krakow_transfered_training-CVIae-piR.jpg
krakow_transfered_training-jWIaeatig.jpg
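The object key assembled in the cell above follows a fixed pattern: the training run UID, a 'transfered_images' folder (spelled as the sample script writes it), and the image from the last iteration, which is iters - 1 because iterations are numbered from 0. A tiny helper (my own sketch) makes the pattern explicit:

```python
def result_object_key(training_uid, base_name, iters):
    """Key of the final output image in the results bucket.
    The script writes iterations 0..iters-1, so the last one is iters - 1.
    'transfered' matches the sample script's folder name."""
    return '{}/transfered_images/{}_at_iteration_{}.png'.format(
        training_uid, base_name, iters - 1)

# Example:
# key = result_object_key('training-CVIae-piR', 'krakow', iters)
```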

Cracow

Have a look at the original picture again.

In [44]:
Image(filename=os.path.join(data_dir, 'krakow.jpg'), width=1000)
Out[44]:

Cracow + Van Gogh

Display the picture after Van Gogh style has been applied.

In [45]:
Image(filename=transfered_images[0], width=1000)
Out[45]:

Cracow + Kandinsky

Display the picture after Kandinsky style has been applied.

In [46]:
Image(filename=transfered_images[1], width=1000)
Out[46]:

6. Summary

You successfully completed this notebook! You learned how to use the watson-machine-learning-client to run experiments.

Author

Lukasz Cmielowski, PhD, is an Automation Architect and Data Scientist at IBM with a track record of developing enterprise-level applications that substantially increase clients' ability to turn data into actionable knowledge.

Copyright © 2018 IBM. This notebook and its source code are released under the terms of the MIT License.
